Automated audio captioning (AAC) is a cross-modal translation task that aims to describe the content of an audio clip in natural language. As shown by the submissions received for Task 6 of the DCASE 2021 Challenge, this problem has received increasing interest in the community. Existing AAC systems are usually based on an encoder-decoder architecture, in which the audio signal is encoded into a latent representation and aligned with its corresponding text description, and a decoder is then used to generate the caption. However, training an AAC system often encounters the problem of data scarcity, which may lead to inaccurate representations and poor audio-text alignment. To address this problem, we propose a novel encoder-decoder framework called Contrastive Loss for Audio Captioning (CL4AC). In CL4AC, self-supervised signals derived from the original audio-text paired data are used to exploit the correspondence between audio and text by contrasting samples, which can improve the quality of the latent representations and the alignment between audio and text while training with limited data. Experiments are carried out on the Clotho dataset to show the effectiveness of our proposed approach.
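As a concrete illustration, the sketch below shows one common form of audio-text contrastive loss of the kind this line of work builds on: matched audio-caption pairs within a batch act as positives and all other pairings as negatives. All names are illustrative; this is a generic InfoNCE-style sketch, not the authors' implementation, which may construct contrastive samples differently.

```python
# Minimal sketch of an InfoNCE-style audio-text contrastive loss.
# Names are illustrative; not the paper's actual code.
import torch
import torch.nn.functional as F

def audio_text_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """audio_emb, text_emb: (batch, dim) latent representations of paired clips/captions."""
    audio_emb = F.normalize(audio_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = audio_emb @ text_emb.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(audio_emb.size(0), device=audio_emb.device)
    # Matched audio-text pairs sit on the diagonal (positives); all other
    # pairings in the batch serve as negatives.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```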
Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.
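A baseline of the kind described can be set up with the Hugging Face transformers library by casting each question-clause pair as a classification input. The sketch below is a generic illustration under that assumption; the question text, field framing, and label scheme are hypothetical and do not reflect MAUD's actual schema or the authors' released code.

```python
# Hedged sketch: a generic Transformer baseline for contract reading
# comprehension. The question, clause, and binary label scheme shown here are
# hypothetical, not MAUD's schema.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

question = "Does the agreement include a pandemic carve-out?"  # hypothetical question
clause = "..."  # a contract clause; long clauses may require chunking past 512 tokens
inputs = tokenizer(question, clause, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # per-label scores for this question-clause pair
print(logits.softmax(-1))
```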
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
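Because the models are openly released, they can be loaded through the Hugging Face transformers library. The sketch below uses the small bloom-560m checkpoint from the same family so the example runs on modest hardware; the full 176B model follows the same API but requires multi-GPU serving.

```python
# Minimal sketch: prompting a BLOOM-family checkpoint via transformers.
# bloom-560m is a small sibling of the 176B model, chosen here so the example
# runs on a single commodity GPU or CPU.
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

prompt = "Translate to French: I love programming."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```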
Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare.
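A short example of how MONAI layers medical-imaging specifics on top of plain PyTorch: a preprocessing pipeline for volumetric scans feeding a purpose-built network. This is a minimal sketch assuming NIfTI volumes on disk, not a complete training recipe; the file path is a placeholder.

```python
# Minimal sketch: a MONAI preprocessing pipeline and a 3D segmentation network,
# assuming NIfTI volumes on disk. The path is a placeholder.
import torch
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensity
from monai.networks.nets import UNet

preprocess = Compose([
    LoadImage(image_only=True),   # reads medical formats such as NIfTI/DICOM
    EnsureChannelFirst(),         # moves the channel axis to the front
    ScaleIntensity(),             # rescales intensities to [0, 1]
])

net = UNet(
    spatial_dims=3, in_channels=1, out_channels=2,
    channels=(16, 32, 64, 128), strides=(2, 2, 2),
)

volume = preprocess("path/to/scan.nii.gz")  # (1, H, W, D) tensor
logits = net(volume.unsqueeze(0))           # add batch dimension
```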
Persona-based dialogue systems aim to generate consistent responses based on historical context and a predefined persona. Unlike conventional dialogue generation, persona-based dialogue needs to consider both the dialogue context and the persona, posing a challenge for coherent training. Specifically, this requires a delicate weight balance between context and persona. To achieve this, in this paper we propose an effective framework with Persona-Adaptive Attention (PAA), which adaptively integrates the weights from the persona and context information via our designed attention. In addition, a dynamic masking mechanism is applied to the PAA to not only drop redundant information in the context and persona but also serve as a regularization mechanism to avoid overfitting. Experimental results demonstrate the superiority of the proposed PAA framework over strong baselines in both automatic and human evaluation. Moreover, the proposed PAA approach performs equivalently well in a low-resource regime, achieving results similar to larger models trained in the full-data setting with only 20% to 30% of the data. To fully exploit the effectiveness of our design, we built several variants that handle the weighted information in different ways, showing the necessity and sufficiency of our weighting and masking designs.
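The core idea of adaptively balancing persona and context can be pictured as computing separate cross-attention outputs over the persona and the context, then blending them with a learned, input-dependent weight. The sketch below is a simplified illustration of that idea with an assumed sigmoid gate; it is not the paper's exact PAA module and omits the dynamic masking.

```python
# Simplified sketch of adaptively blending persona and context signals.
# Not the paper's exact PAA module; the gating design here is illustrative.
import torch
import torch.nn as nn

class AdaptivePersonaContextFusion(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn_persona = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_context = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_model, 1)  # produces the persona/context balance

    def forward(self, decoder_states, persona, context):
        p, _ = self.attn_persona(decoder_states, persona, persona)
        c, _ = self.attn_context(decoder_states, context, context)
        w = torch.sigmoid(self.gate(torch.cat([p, c], dim=-1)))  # (B, T, 1) in [0, 1]
        return w * p + (1 - w) * c  # adaptive weight balance between persona and context
```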
Classifying samples in incomplete datasets is a common goal for machine learning practitioners, but it is not trivial. Missing data are found in most real-world datasets, and these missing values are typically imputed using established methods, after which classification is performed on the now-complete samples. The focus of machine learning researchers is then to optimise the downstream classification performance. In this study, we highlight that the quality of the imputation must also be considered. We demonstrate how the commonly used measures for assessing imputation quality are flawed, and propose a new class of discrepancy scores that focus on how well a method recreates the overall distribution of the data. Finally, we highlight the compromised interpretability of classifier models trained using poorly imputed data.
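A minimal sketch of a distribution-focused score in the spirit described here: per feature, compare the distribution of imputed values against the observed values with a distance such as the Wasserstein distance. This is an illustrative stand-in under that assumption, not the exact discrepancy scores proposed in the paper.

```python
# Illustrative sketch: score an imputation by how well the imputed values match
# the distribution of observed values, per feature. Not the paper's exact scores.
import numpy as np
from scipy.stats import wasserstein_distance

def distributional_discrepancy(X_imputed, X_original, mask):
    """mask[i, j] is True where X_original[i, j] was missing (i.e. was imputed)."""
    scores = []
    for j in range(X_imputed.shape[1]):
        observed = X_original[~mask[:, j], j]   # values actually present in the data
        imputed = X_imputed[mask[:, j], j]      # values filled in by the method
        if len(imputed) and len(observed):
            scores.append(wasserstein_distance(observed, imputed))
    return float(np.mean(scores))  # lower = imputation better recreates the data
```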
Accurate real-time traffic forecasting is essential to intelligent transportation systems (ITS) and serves as the cornerstone of various smart mobility applications. Although this research field is dominated by deep learning, recent studies indicate that the accuracy gains from developing new model architectures are becoming marginal. Instead, we envision that improvements can be achieved by transferring "forecasting-related knowledge" across cities with different data distributions and network topologies. To this end, this paper proposes a novel transferable traffic forecasting framework: the Domain Adversarial Spatial-Temporal Network (DASTNet). DASTNet is pre-trained on multiple source networks and fine-tuned on the target network's traffic data. Specifically, we leverage graph representation learning and adversarial domain adaptation techniques to learn domain-invariant node embeddings, which are further incorporated to model the temporal traffic data. To the best of our knowledge, we are the first to employ adversarial multi-domain adaptation for the network-wide traffic forecasting problem. DASTNet consistently outperforms all state-of-the-art baseline methods on three benchmark datasets. The trained DASTNet is applied to newly deployed traffic detectors in Hong Kong, and accurate traffic predictions can be delivered immediately (within one day) once the detectors become available. Overall, this study suggests an alternative way to enhance traffic forecasting methods and provides practical implications for cities that lack historical traffic data.
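Adversarial domain adaptation of the kind described typically hinges on a gradient reversal layer: the embeddings are trained to forecast traffic well while a domain classifier, receiving reversed gradients, pushes them to become domain-invariant. The sketch below shows this standard mechanism (Ganin and Lempitsky style) as a generic illustration; it is not DASTNet's released code.

```python
# Standard gradient reversal layer, as commonly used in adversarial domain
# adaptation. A generic sketch, not DASTNet's code.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity in the forward pass, negated (scaled) gradient in the
        # backward pass: the encoder learns to fool the domain classifier,
        # yielding domain-invariant embeddings.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

# Usage: domain_logits = domain_classifier(grad_reverse(node_embeddings))
```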
Referring image segmentation is a fundamental vision-language task that aims to segment the object referred to by a natural language expression in an image. A key challenge behind this task is leveraging the referring expression to highlight the relevant positions in the image. A common paradigm for tackling this problem is to employ a powerful vision-language ("cross-modal") decoder to fuse features extracted independently from a vision encoder and a language encoder. Recent methods have made remarkable progress in this paradigm by exploiting Transformers as cross-modal decoders, concurrent with the Transformer's overwhelming success in many other vision-language tasks. Taking a different approach in this work, we show that significantly better cross-modal alignment can be achieved through the early fusion of linguistic and visual features in the intermediate layers of a vision Transformer encoder network. By conducting cross-modal feature fusion during the visual feature encoding stage, we can leverage the well-proven correlation modeling power of the Transformer encoder to mine useful multi-modal context. In this way, accurate segmentation results are readily harvested with a lightweight mask predictor. Without bells and whistles, our method surpasses the previous state-of-the-art methods on RefCOCO, RefCOCO+, and G-Ref.
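The early-fusion idea can be pictured as injecting language features between encoder stages rather than after them. The sketch below is a schematic of that flow with hypothetical module names; it is not the paper's architecture, whose fusion module and backbone differ in detail.

```python
# Schematic sketch of early language-vision fusion inside the visual encoder:
# language features are mixed into intermediate visual features between encoder
# stages, rather than fused by a separate decoder afterwards. Names hypothetical.
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Cross-attends visual tokens to language tokens, then adds the result."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, visual_tokens, lang_tokens):
        fused, _ = self.cross_attn(visual_tokens, lang_tokens, lang_tokens)
        return self.norm(visual_tokens + fused)

class EarlyFusionEncoder(nn.Module):
    def __init__(self, stages: nn.ModuleList, d_model: int):
        super().__init__()
        self.stages = stages  # e.g. the stages of a vision Transformer encoder
        self.fusions = nn.ModuleList(FusionBlock(d_model) for _ in stages)

    def forward(self, visual_tokens, lang_tokens):
        for stage, fuse in zip(self.stages, self.fusions):
            visual_tokens = stage(visual_tokens)
            visual_tokens = fuse(visual_tokens, lang_tokens)  # fusion mid-encoder
        return visual_tokens  # fed to a lightweight mask predictor
```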
Spatial convolutions are widely used in many deep video models. They implicitly assume spatio-temporal invariance, i.e., sharing weights across every location in different frames. This work presents Temporally-Adaptive Convolutions (TAdaConv) for video understanding, showing that adaptive weight calibration along the temporal dimension is an efficient way to facilitate modeling complex temporal dynamics in videos. Specifically, TAdaConv empowers spatial convolutions with temporal modeling ability by calibrating the convolution weights for each frame according to its local and global temporal context. Compared to previous temporal modeling operations, TAdaConv is more efficient because it operates on the convolution kernels rather than on the features, whose dimension is an order of magnitude smaller than the spatial resolution. Furthermore, the kernel calibration brings an increased model capacity. We construct TAda2D networks by replacing the spatial convolutions in ResNet with TAdaConv, which leads to on-par or better performance compared to state-of-the-art approaches on multiple video action recognition and localization benchmarks. We also show that, as a readily plug-in operation with negligible computation overhead, TAdaConv can effectively improve many existing video models by a convincing margin. Code and models are available at https://github.com/alibaba-mmai-research/pytorch-video-understanding.
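A simplified sketch of the calibration idea: a shared 2D kernel is scaled per frame by factors generated from that frame's temporal context, so the same spatial convolution behaves differently across time. The calibration generator and dimensions below are illustrative assumptions, not the exact TAdaConv formulation.

```python
# Simplified sketch of temporally-adaptive weight calibration: a shared spatial
# kernel is scaled per frame by factors derived from temporal context. The
# calibration operates on kernels, not features. Illustrative only; the real
# TAdaConv formulation differs in detail.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporallyAdaptiveConv2d(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        # Temporal conv over per-frame descriptors -> per-frame, per-output-channel
        # calibration factors (an assumed, simple generator).
        self.calib = nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1)
        self.pad = k // 2

    def forward(self, x):  # x: (B, C, T, H, W)
        b, c, t, h, w = x.shape
        desc = x.mean(dim=(3, 4))                # (B, C, T) global frame descriptors
        alpha = torch.sigmoid(self.calib(desc))  # (B, out_ch, T) calibration factors
        out = []
        for f in range(t):
            # Scale the shared kernel for this frame; factors are averaged over
            # the batch here for brevity (per-sample kernels would need grouped conv).
            a = alpha[:, :, f].mean(0).view(-1, 1, 1, 1)  # (out_ch, 1, 1, 1)
            out.append(F.conv2d(x[:, :, f], self.weight * a, padding=self.pad))
        return torch.stack(out, dim=2)           # (B, out_ch, T, H, W)
```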
The core idea of contrastive learning is to discriminate between different instances and to force different views of the same instance to share the same representation. To avoid trivial solutions, augmentation plays an important role in generating different views, among which random cropping has been shown to help the model learn generalized and robust representations. The commonly used random crop operation keeps the distribution of the disparity between the two views unchanged throughout training. In this work, we show that adaptively controlling the disparity between the two augmented views along the training process enhances the quality of the learned representations. Specifically, we propose a parametric cubic cropping operation for video contrastive learning, which automatically crops a 3D cubic region from the video via a differentiable 3D affine transformation. The cropping parameters are trained simultaneously with the video backbone using an adversarial objective, so that an optimal cropping strategy is learned from the data. Visualizations show that the parametric cropping adaptively controls the center distance and the IoU between the two augmented views, and that the learned change in disparity along the training process is beneficial for learning strong representations. Extensive ablation studies demonstrate the effectiveness of the proposed parametric cropping on multiple contrastive learning frameworks and video backbones. Code and models are available.
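Differentiable cropping of this kind can be realized with PyTorch's affine grid sampling: an affine matrix parameterizes the crop's scale and translation across width, height, and time, and grid_sample extracts the view so that gradients flow back into the cropping parameters. A minimal sketch under those assumptions, not the authors' exact operation.

```python
# Minimal sketch of a differentiable 3D crop via affine grid sampling: gradients
# flow through grid_sample back into the crop parameters, so a cropping policy
# can be trained adversarially alongside the backbone. Not the authors' exact op.
import torch
import torch.nn.functional as F

def differentiable_cubic_crop(video, scale, translation, out_size=(8, 112, 112)):
    """video: (B, C, T, H, W); scale: (B, 3) in (0, 1]; translation: (B, 3) in [-1, 1]."""
    b = video.size(0)
    theta = torch.zeros(b, 3, 4, device=video.device)
    theta[:, 0, 0] = scale[:, 0]  # horizontal extent of the crop (x / width)
    theta[:, 1, 1] = scale[:, 1]  # vertical extent (y / height)
    theta[:, 2, 2] = scale[:, 2]  # temporal extent (z / frames)
    theta[:, :, 3] = translation  # crop center offset in normalized coordinates
    grid = F.affine_grid(theta, (b, video.size(1)) + out_size, align_corners=False)
    return F.grid_sample(video, grid, align_corners=False)
```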